earnings call
AI 'slop' is transforming social media - and a backlash is brewing
Théodore remembers the AI slop that tipped him over the edge. The image was of two emaciated, impoverished South Asian children. For some reason, despite their boyish features, they had thick beards. One of them had no hands and only one foot. The other was holding a sign saying it was his birthday and asking for likes.
- North America > United States (0.69)
- North America > Central America (0.14)
- Africa (0.05)
- (13 more...)
- Leisure & Entertainment > Sports (0.51)
- Media > News (0.47)
- Information Technology > Services (0.47)
'We excel at every phase of AI': Nvidia CEO quells Wall Street fears of AI bubble amid market selloff
Global share markets rose after Nvidia posted third-quarter earnings that beat Wall Street estimates, assuaging for now concerns about whether the high-flying valuations of AI firms had peaked. On Wednesday, all eyes were on Nvidia, the bellwether for the AI industry and the most valuable publicly traded company in the world, with analysts and investors hoping the chipmaker's third-quarter earnings would dampen fears that a bubble was forming in the sector. Jensen Huang, founder and CEO of Nvidia, opened the earnings call with an attempt to dispel those concerns, saying that there was a major transformation happening in AI, and Nvidia was foundational to that transformation. "There's been a lot of talk about an AI bubble," said Huang. "From our vantage point, we see something very different. As a reminder, Nvidia is unlike any other accelerator. We excel at every phase of AI from pre-training to post-training to inference."
- North America > United States > New York > New York County > New York City (0.84)
- Oceania > Australia (0.05)
- Europe > Ukraine (0.05)
- Information Technology > Hardware (1.00)
- Banking & Finance > Trading (1.00)
Trump's Investment in Intel Is Paying Off
The chipmaker reported higher-than-expected revenue on Thursday, and its stock price has risen over 90 percent since August. The Trump administration's investment in Intel appears to be paying off so far, but the once-mighty chipmaker still has a long way to climb back to industry dominance. In August, the US government announced it was converting about $9 billion in federal grants that Intel had been issued during the Biden administration into a roughly 10 percent equity stake in the company. During its third-quarter earnings on Thursday, its first financial update since Trump's surprise investment, Intel reported that it earned $13.7 billion in revenue over the past three months, a three percent increase year-over-year. It's the fourth consecutive quarter that Intel has beaten revenue guidance.
- North America > United States > California (0.30)
- Asia (0.29)
- North America > United States > Arizona (0.15)
- Europe (0.15)
FinDebate: Multi-Agent Collaborative Intelligence for Financial Analysis
Cai, Tianshi, Li, Guanxu, Han, Nijia, Huang, Ce, Wang, Zimu, Zeng, Changyu, Wang, Yuqi, Zhou, Jingshi, Zhang, Haiyang, Chen, Qi, Pan, Yushan, Wang, Shuihua, Wang, Wei
We introduce FinDebate, a multi-agent framework for financial analysis, integrating collaborative debate with domain-specific Retrieval-Augmented Generation (RAG). Five specialized agents, covering earnings, market, sentiment, valuation, and risk, run in parallel to synthesize evidence into multi-dimensional insights. To mitigate overconfidence and improve reliability, we introduce a safe debate protocol that enables agents to challenge and refine initial conclusions while preserving coherent recommendations. Experimental results, based on both LLM-based and human evaluations, demonstrate the framework's efficacy in producing high-quality analysis with calibrated confidence levels and actionable investment strategies across multiple time horizons.
- Financial News (1.00)
- Press Release (0.68)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
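The FinDebate abstract describes five specialist agents running in parallel, followed by a "safe debate" that tempers overconfident conclusions without overturning the consensus. The paper does not include code here; the following sketch illustrates only the orchestration pattern, with all agent names, outputs, and the confidence-capping rule invented for illustration (a real system would back each agent with RAG-grounded LLM calls).

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the five specialists named in the abstract.
AGENTS = {
    "earnings":  lambda doc: {"view": "revenue beat",          "confidence": 0.90},
    "market":    lambda doc: {"view": "sector momentum",       "confidence": 0.80},
    "sentiment": lambda doc: {"view": "positive tone",         "confidence": 0.95},
    "valuation": lambda doc: {"view": "rich multiple",         "confidence": 0.70},
    "risk":      lambda doc: {"view": "supply concentration",  "confidence": 0.60},
}

def run_agents(doc):
    # Run the specialists in parallel, as the framework describes.
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, doc) for name, fn in AGENTS.items()}
        return {name: f.result() for name, f in futures.items()}

def safe_debate(insights, cap=0.85):
    # Toy "safe debate": challenge overconfidence by capping each agent's
    # confidence while preserving its recommendation, rather than letting
    # one agent overturn the consensus.
    return {name: {**ins, "confidence": min(ins["confidence"], cap)}
            for name, ins in insights.items()}

insights = safe_debate(run_agents("ACME Q3 earnings call transcript"))
```

The design point the sketch captures is that debate here is a calibration step layered on top of independent parallel analyses, not a winner-take-all argument.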
Multimodal Proposal for an AI-Based Tool to Increase Cross-Assessment of Messages
Castro, Alejandro Álvarez, Ordieres-Meré, Joaquín
Earnings calls represent a uniquely rich and semi-structured source of financial communication, blending scripted managerial commentary with unscripted analyst dialogue. Although recent advances in financial sentiment analysis have integrated multi-modal signals, such as textual content and vocal tone, most systems rely on flat document-level or sentence-level models, failing to capture the layered discourse structure of these interactions. This paper introduces a novel multi-modal framework designed to generate semantically rich and structurally aware embeddings of earnings calls, by encoding them as hierarchical discourse trees. Each node, comprising either a monologue or a question-answer pair, is enriched with emotional signals derived from text, audio, and video, as well as structured metadata including coherence scores, topic labels, and answer coverage assessments. A two-stage transformer architecture is proposed: the first encodes multi-modal content and discourse metadata at the node level using contrastive learning, while the second synthesizes a global embedding for the entire conference. Experimental results reveal that the resulting embeddings form stable, semantically meaningful representations that reflect affective tone, structural logic, and thematic alignment. Beyond financial reporting, the proposed system generalizes to other high-stakes unscripted communicative domains such as tele-medicine, education, and political discourse, offering a robust and explainable approach to multi-modal discourse representation with practical utility for downstream tasks such as financial forecasting and discourse evaluation.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Discourse & Dialogue (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
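The abstract's central data structure is a hierarchical discourse tree whose nodes are monologues or question-answer pairs carrying per-modality emotion signals plus structured metadata. A minimal sketch of such a node, with all field names and values assumed rather than taken from the paper, might look like this; `flatten` yields nodes in the depth-first order a node-level encoder would consume.

```python
from dataclasses import dataclass, field

# Sketch of a discourse-tree node; field names are assumptions, not the
# authors' schema. Emotion scores per modality are illustrative floats.
@dataclass
class DiscourseNode:
    kind: str                 # "monologue" or "qa_pair"
    text_emotion: float       # emotion signal from text
    audio_emotion: float      # emotion signal from audio
    video_emotion: float      # emotion signal from video
    coherence: float          # structured metadata: coherence score
    topic: str                # structured metadata: topic label
    children: list = field(default_factory=list)

def flatten(node):
    """Depth-first traversal: the order a node-level encoder would see."""
    yield node
    for child in node.children:
        yield from flatten(child)

# A toy call: one scripted monologue followed by two Q&A exchanges.
call = DiscourseNode("monologue", 0.2, 0.1, 0.3, 0.9, "guidance", children=[
    DiscourseNode("qa_pair", -0.1, 0.4, 0.0, 0.7, "margins"),
    DiscourseNode("qa_pair", 0.3, 0.2, 0.1, 0.8, "capex"),
])
nodes = list(flatten(call))
```

In the paper's two-stage architecture, the first transformer would embed each such node (content plus metadata) via contrastive learning, and the second would pool node embeddings into one conference-level vector.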
The Sound of Risk: A Multimodal Physics-Informed Acoustic Model for Forecasting Market Volatility and Enhancing Market Interpretability
Chen, Xiaoliang, Yu, Xin, Chang, Le, Jing, Teng, He, Jiashuai, Wang, Ze, Luo, Yangjun, Chen, Xingyu, Liang, Jiayue, Wang, Yuchen, Xie, Jiaying
Information asymmetry in financial markets, often amplified by strategically crafted corporate narratives, undermines the effectiveness of conventional textual analysis. We propose a novel multimodal framework for financial risk assessment that integrates textual sentiment with paralinguistic cues derived from executive vocal tract dynamics in earnings calls. Central to this framework is the Physics-Informed Acoustic Model (PIAM), which applies nonlinear acoustics to robustly extract emotional signatures from raw teleconference sound subject to distortions such as signal clipping. Both acoustic and textual emotional states are projected onto an interpretable three-dimensional Affective State Label (ASL) space: Tension, Stability, and Arousal. Using a dataset of 1,795 earnings calls (approximately 1,800 hours), we construct features capturing dynamic shifts in executive affect between scripted presentation and spontaneous Q&A exchanges. Our key finding reveals a pronounced divergence in predictive capacity: while multimodal features do not forecast directional stock returns, they explain up to 43.8% of the out-of-sample variance in 30-day realized volatility. Importantly, volatility predictions are strongly driven by emotional dynamics during executive transitions from scripted to spontaneous speech, particularly reduced textual stability and heightened acoustic instability from CFOs, and significant arousal variability from CEOs. An ablation study confirms that our multimodal approach substantially outperforms a financials-only baseline, underscoring the complementary contributions of acoustic and textual modalities. By decoding latent markers of uncertainty from verifiable biometric signals, our methodology provides investors and regulators a powerful tool for enhancing market interpretability and identifying hidden corporate uncertainty.
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.93)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.69)
- Information Technology > Artificial Intelligence > Speech (0.66)
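The paper's key feature is the shift in an executive's affective state between scripted remarks and spontaneous Q&A, regressed against 30-day realized volatility. The sketch below illustrates only that feature-then-regress shape with a one-feature ordinary least squares fit; the ASL coordinates and volatility numbers are invented, and the real model is far richer.

```python
# Hypothetical illustration: shift in ASL coordinates (tension, stability,
# arousal) from the scripted section to Q&A, regressed on realized
# volatility. All numbers below are made up for the example.

def affect_shift(scripted, qna):
    """Per-dimension change from the scripted section to Q&A."""
    return {k: qna[k] - scripted[k] for k in scripted}

def fit_slope(xs, ys):
    # One-feature ordinary least squares, enough to show the idea of
    # regressing 30-day realized volatility on an affect-shift feature.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    return beta, my - beta * mx

# Toy calls: larger drops in stability go with higher realized volatility,
# mirroring the paper's finding that reduced stability drives predictions.
calls = [
    ({"stability": 0.8}, {"stability": 0.4}, 0.31),
    ({"stability": 0.9}, {"stability": 0.8}, 0.18),
    ({"stability": 0.7}, {"stability": 0.2}, 0.42),
]
xs = [affect_shift(s, q)["stability"] for s, q, _ in calls]
ys = [vol for _, _, vol in calls]
beta, alpha = fit_slope(xs, ys)  # beta is negative on this toy data
```

The negative slope is the point: a larger *drop* in stability when executives go off-script corresponds to higher subsequent volatility, which is the directional relationship the abstract reports.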
SPGISpeech 2.0: Transcribed multi-speaker financial audio for speaker-tagged transcription
Grossman, Raymond, Park, Taejin, Dhawan, Kunal, Titus, Andrew, Zhi, Sophia, Shchadilova, Yulia, Wang, Weiqing, Balam, Jagadeesh, Ginsburg, Boris
We introduce SPGISpeech 2.0, a dataset suitable for speaker-tagged transcription in the financial domain. SPGISpeech 2.0 improves the diversity of applicable modeling tasks while maintaining the core characteristic of the original SPGISpeech dataset: audio snippets and their corresponding fully formatted text transcriptions, usable for end-to-end automatic speech recognition (ASR). SPGISpeech 2.0 consists of 3,780 additional hours of professionally transcribed earnings calls. Furthermore, the dataset contains call and speaker information for each audio snippet, facilitating multi-talker ASR. We validate the utility of SPGISpeech 2.0 through improvements in the speaker-tagged ASR performance of popular speech recognition models after fine-tuning on SPGISpeech 2.0. SPGISpeech 2.0 is released free for non-commercial use, and we expect it to foster advancements in speech recognition technologies and inspire a wide range of research applications.
- North America > United States (0.04)
- South America (0.04)
- North America > Central America (0.04)
- (3 more...)
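SPGISpeech 2.0 pairs each audio snippet with a formatted transcript plus call and speaker information. A small sketch of what working with such records looks like follows; the field names below are illustrative stand-ins, not the dataset's released schema.

```python
# Illustrative records: each snippet carries a call ID, a speaker ID, a
# start time, and its fully formatted transcript. Field names are assumed.
snippets = [
    {"call_id": "c001", "speaker_id": "spk_2", "start": 12.4,
     "text": "Thank you, operator. Revenue grew 8% year-over-year."},
    {"call_id": "c001", "speaker_id": "spk_5", "start": 19.1,
     "text": "Could you break that down by segment?"},
]

def speaker_tagged_transcript(snips, call_id):
    """Assemble a multi-talker transcript for one call, ordered by time,
    with each utterance prefixed by its speaker tag."""
    rows = sorted((s for s in snips if s["call_id"] == call_id),
                  key=lambda s: s["start"])
    return "\n".join(f"[{s['speaker_id']}] {s['text']}" for s in rows)

transcript = speaker_tagged_transcript(snippets, "c001")
```

This per-snippet speaker metadata is what separates speaker-tagged (multi-talker) ASR targets from the plain snippet-to-text pairs of the original SPGISpeech.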
Big tech has spent $155bn on AI this year. It's about to spend hundreds of billions more
The US's largest companies have spent 2025 locked in a competition to spend more money than one another, lavishing $155bn on the development of artificial intelligence, more than the US government has spent on education, training, employment and social services in the 2025 fiscal year so far. Based on the most recent financial disclosures of Silicon Valley's biggest players, the race is about to accelerate to hundreds of billions in a single year. Over the past two weeks, Meta, Microsoft, Amazon, and Alphabet, Google's parent, have shared their quarterly public financial reports. Each disclosed that their year-to-date capital expenditure, a figure that refers to the money companies spend to acquire or upgrade tangible assets, already totals tens of billions. Capex, as the term is abbreviated, is a proxy for technology companies' spending on AI because the technology requires gargantuan investments in physical infrastructure, namely data centers, which require large amounts of power, water and expensive semiconductor chips.
- North America > United States > California (0.25)
- North America > United States > New York > New York County > New York City (0.05)
- Asia > China (0.05)
- Information Technology > Services (0.39)
- Government > Regional Government (0.36)
Apple quietens Wall Street's fears of China struggles and slow AI progress
Apple has been under pressure this year. It's playing catch-up to its fellow tech giants on artificial intelligence, it has seen its stock fall by double digits since the year began, it closed a store in China for the first time ever this week, and looming US tariffs on Beijing threaten its supply chain. On Thursday, the company released its third-quarter earnings of the fiscal year as investors scrutinized how the iPhone maker might turn things around. Despite the gloomy outlook, the company is still worth more than $3tn, and it beat Wall Street's expectations for profit and revenue this quarter. Apple reported a massive 10% year-over-year increase in revenue to $94.04bn, and $1.57 per share in earnings.
- North America > United States > New York > New York County > New York City (0.63)
- Asia > China > Beijing > Beijing (0.26)
- Asia > India (0.07)
- Asia > Vietnam (0.06)
- Banking & Finance > Trading (0.77)
- Government > Regional Government > North America Government > United States Government (0.52)
Modeling Professionalism in Expert Questioning through Linguistic Differentiation
D'Agostino, Giulia, Chen, Chung-Chi
Professionalism is a crucial yet underexplored dimension of expert communication, particularly in high-stakes domains like finance. This paper investigates how linguistic features can be leveraged to model and evaluate professionalism in expert questioning. We introduce a novel annotation framework to quantify structural and pragmatic elements in financial analyst questions, such as discourse regulators, prefaces, and request types. Using both human-authored and large language model (LLM)-generated questions, we construct two datasets: one annotated for perceived professionalism and one labeled by question origin. We show that the same linguistic features correlate strongly with both human judgments and authorship origin, suggesting a shared stylistic foundation. Furthermore, a classifier trained solely on these interpretable features outperforms gemini-2.0 and SVM baselines in distinguishing expert-authored questions. Our findings demonstrate that professionalism is a learnable, domain-general construct that can be captured through linguistically grounded modeling.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.05)
- Europe > Switzerland (0.04)
- Asia > South Korea (0.04)
- Asia > Japan (0.04)
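The professionalism paper's claim is that a classifier trained solely on interpretable linguistic features (counts of discourse regulators, prefaces, request types, and the like) can model perceived professionalism. The sketch below shows only that shape: counting features and scoring them linearly. The word lists and weights are invented for illustration and are not the authors' annotation framework.

```python
# Illustrative only: toy word lists standing in for the paper's annotated
# structural and pragmatic elements. Both lists and weights are assumptions.
REGULATORS = {"okay", "so", "right", "well"}
PREFACES = {"thanks", "congrats", "congratulations", "good"}

def features(question):
    """Count interpretable linguistic features of an analyst question."""
    words = question.lower().replace(",", "").replace(".", "").split()
    return {
        "regulators": sum(w in REGULATORS for w in words),
        "prefaces": sum(w in PREFACES for w in words),
        "length": len(words),
    }

def professionalism_score(f):
    # Toy linear scorer over the interpretable features; a trained
    # classifier would learn these weights from the annotated datasets.
    return 0.5 * f["prefaces"] + 0.2 * f["regulators"] + 0.01 * f["length"]

q = "Thanks, good morning. So, could you walk us through gross margin drivers?"
score = professionalism_score(features(q))
```

Because every feature is a human-readable count, such a model stays inspectable, which is what lets the paper argue professionalism is a learnable, domain-general construct rather than an opaque stylistic signal.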